Academic Open Internet Journal

ISSN 1311-4360

www.acadjournal.com

Volume 19, 2006

 

 

IMPROVED ACTIVE NOISE FEEDFORWARD CONTROL SYSTEMS USING DELTA RULE ALGORITHM

 

S. MANIKANDAN, S. MYTHILI

ASST PROF, DEPT OF ECE, KSR COLLEGE OF TECH,

ANNA UNIVERSITY,

TIRUCHENGODE-209, TAMILNADU,

INDIA.

 

ABSTRACT:

Active noise control (ANC) involves an electro-acoustic or electromechanical system that cancels the primary (unwanted) noise based on the principle of superposition. An anti-noise signal of equal amplitude and opposite phase is generated and combined with the primary noise, resulting in the cancellation of the noise. A fundamental problem to be considered in ANC systems is the requirement of highly precise control, temporal stability, and reliability. To produce a high degree of attenuation, the amplitude and phase of both the primary and the secondary noise must match with close precision. Adaptive filters are used to control the noise, and they have a linear input-output characteristic. If a transfer path of the noise has nonlinear characteristics, it is difficult for such a filter to generate an optimal anti-noise. In this paper, we propose the use of the delta rule algorithm, which employs a nonlinear output function. The delta rule is used for learning complex patterns in artificial neural networks. We have implemented the adaptive filters using the least mean square (LMS) algorithm, the recursive least square (RLS) algorithm, and the delta rule algorithm, and compared the results.

 

Key terms: ANC, LMS, RLS, delta rule, error signal, neural networks.

 


1 INTRODUCTION

Active noise is real-time noise and cannot be predicted (i.e., it is random). The traditional way to cancel noise, called passive noise control, relies on techniques based on sound-absorbing materials and is effective for higher-frequency noise. However, significant power of industrial noise often occurs in the frequency range between 50 and 250 Hz. Here the wavelength of sound is so long that passive techniques are no longer cost effective, because they require material that is too heavy.

 

An active noise control system works on the principle of superposition. The system consists of a controller to which a reference about the noise is given. The controller properly scales the reference noise and phase-reverses it. The phase-reversed signal is then added to the input signal, which carries noise along with the original message signal, so that the noise gets cancelled out. The methods used for ANC systems include both feedback and feed-forward control. ANC is based on either feed-forward control, where a coherent reference noise input is sensed before it propagates past the secondary source, or feedback control, where the active noise controller attempts to cancel the noise without the benefit of an "upstream" reference input. The performance of the active control system is determined largely by the signal processing algorithm and the actual acoustical implementation. Effective algorithm design requires reasonable knowledge of algorithm behavior for the desired operating conditions. Since active noise is random and cannot be properly predicted, the controller should contain an adaptive filter whose coefficients change based on the error signal, which is the difference between the output of the controller and the output of the unknown plant. To achieve noise reduction with complicated multiple noise sources, we must use active noise control with multiple reference channels; that is, the input signal to each channel is correlated, and the outputs are also correlated.

 

2 LMS ALGORITHM

Fig 2.1 Block Diagram of the ANC Control System Using LMS Algorithm

 

The least mean square (LMS) algorithm [3][4][9] is a stochastic gradient algorithm that iterates each tap weight of the filter in the direction of the gradient of the squared amplitude of an error signal with respect to that tap weight. The LMS algorithm is an approximation of the steepest descent algorithm that uses an instantaneous estimate of the gradient vector. The estimate of the gradient is based on sample values of the tap-input vector and an error signal. The algorithm iterates over each tap weight in the filter, moving it in the direction of the approximated gradient. The LMS algorithm was devised by Widrow and Hoff in 1959. The objective is to change (adapt) the coefficients of an FIR filter, w(n), to match as closely as possible the response of an unknown system, p(n). The unknown system and the adapting filter process the same input signal x(n) and have outputs d(n) (also referred to as the desired signal) and y(n), respectively.

The LMS algorithm basically has two processes: a filtering process and an adaptive process. In the filtering process, the reference signal is filtered by the adaptive filter and combined with the desired signal; the error signal is the difference between the desired signal and the output of the filter w(n). In the adaptive process, the reference signal and the error signal are fed to the LMS algorithm, and the weights of the filter are modified accordingly.

It is assumed that all the impulse responses in this paper are modeled by those of finite impulse response (FIR) filters. d(n) is the primary noise to be controlled and x(n) is the reference about the noise.

 

d(n) = p^T(n) x1(n)                  (1)

 

where p(n) is the impulse response of the unknown plant, x1(n) = [x(n) x(n-1) … x(n-M+1)]^T, and M is the length of p(n). y(n) is the output signal from the filter.

y(n) = w^T(n) x2(n)                   (2)

where

w(n) = [w(n) w(n-1) w(n-2) … w(n-N+1)]^T is the weight vector of the ANC controller with length N, and x2(n) = [x(n) x(n-1) … x(n-N+1)]^T. The error signal e(n) is the difference between the desired signal d(n) and the output of the filter y(n).

 

e(n) = d(n) – y(n)                       (3)

 

The weights of the filter w(n) are updated using the following equation.

 

w(n+1) = w(n) + μ e(n) x2(n)     (4)

 

The parameter μ is termed the step size, and it has a profound effect on the convergence behavior of the LMS algorithm.

If μ is too small, the algorithm will take an extraordinary amount of time to converge. When μ is increased, the algorithm converges more quickly; however, if μ is increased too much, the algorithm will actually diverge. A good upper bound on the value of μ is 2/3 over the sum of the eigenvalues of the autocorrelation matrix of the input signal. The correction that is applied in updating the old estimate of the coefficient vector is based on the instantaneous sample value of the tap-input vector and the error signal. The correction applied to the previous estimate consists of the product of three factors: the (scalar) step-size parameter μ, the error signal e(n-1), and the tap-input vector u(n-1). The LMS algorithm requires approximately 20L iterations to converge in mean square, where L is the number of tap coefficients contained in the tapped-delay-line filter. The LMS algorithm requires 2L + 1 multiplications per iteration, increasing linearly with L.
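To make the two processes concrete, the following is a minimal Python/NumPy sketch of the loop defined by Eqs. (1)-(4). The authors' simulations used MATLAB; this translation, the function name lms_anc, the randomly generated 0.1-scaled plant, and the step size mu = 0.001 are illustrative assumptions.

    import numpy as np

    def lms_anc(x, p, N, mu):
        # Adapt w(n) per Eqs. (1)-(4): d(n) = p^T(n) x1(n), y(n) = w^T(n) x2(n),
        # e(n) = d(n) - y(n), w(n+1) = w(n) + mu e(n) x2(n).
        M = len(p)
        K = max(M, N) - 1
        xpad = np.concatenate([np.zeros(K), x])   # zero history before n = 0
        w = np.zeros(N)                           # weights initialized to zero
        e = np.zeros(len(x))
        for n in range(len(x)):
            k = n + K                             # index of x(n) in xpad
            x1 = xpad[k - M + 1:k + 1][::-1]      # [x(n), x(n-1), ..., x(n-M+1)]
            x2 = xpad[k - N + 1:k + 1][::-1]      # [x(n), x(n-1), ..., x(n-N+1)]
            d = p @ x1                            # Eq. (1): primary noise
            e[n] = d - w @ x2                     # Eqs. (2)-(3): residual noise
            w += mu * e[n] * x2                   # Eq. (4): weight update
        return w, e

    # Example: unit-variance white Gaussian reference, as in Section 5.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10000)
    p = 0.1 * rng.standard_normal(127)            # stand-in unknown plant
    w, e = lms_anc(x, p, N=96, mu=0.001)
    print("steady-state residual power:", np.mean(e[-1000:] ** 2))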

 

 

3 RLS ALGORITHM

 

The recursive least squares (RLS) algorithm [1][2] can be used with an adaptive transversal filter to provide faster convergence and smaller steady-state error than the LMS algorithm. The RLS algorithm uses the information contained in all the previous input data to estimate the inverse of the autocorrelation matrix of the input vector, and it uses this estimate to properly adjust the tap weights of the filter.

 

In the RLS algorithm, the computation of the correction utilizes all the past available information. The correction consists of the product of two factors: the true estimation error ξ(n) and the gain vector k(n). The gain vector itself consists of P^-1(n), the inverse of the deterministic correlation matrix, multiplied by the tap-input vector u(n). The major difference between the LMS and RLS algorithms is therefore the presence of P^-1(n) in the correction term of the RLS algorithm, which has the effect of decorrelating the successive tap inputs, thereby making the RLS algorithm self-orthogonalizing. Because of this property, the RLS algorithm is essentially independent of the eigenvalue spread of the correlation matrix of the filter input. The RLS algorithm converges in mean square within less than 2L iterations; its rate of convergence is therefore, in general, faster than that of the LMS algorithm by an order of magnitude. There are no approximations made in the derivation of the RLS algorithm. Accordingly, as the number of iterations approaches infinity, the least-squares estimate of the coefficient vector approaches the optimum Wiener value, and correspondingly, the mean-square error approaches the minimum value possible. In other words, the RLS algorithm, in theory, exhibits zero misadjustment. The superior performance of the RLS algorithm compared to the LMS algorithm is attained at the expense of a large increase in computational complexity. The complexity of an adaptive algorithm for real-time operation is determined by two principal factors: (1) the number of multiplications (with divisions counted as multiplications) per iteration, and (2) the precision required to perform the arithmetic operations. The RLS algorithm requires a total of 3L(3 + L)/2 multiplications, which increases as the square of L, the number of filter coefficients, but the order of the RLS algorithm can be reduced.
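The text above describes the RLS correction but does not list the recursions themselves; the sketch below uses the standard exponentially weighted RLS form found in Haykin [2]. The forgetting factor lam and the initialization P(0) = delta·I (and the values chosen for them) are assumptions.

    import numpy as np

    def rls_anc(x, p, N, lam=0.999, delta=100.0):
        # Standard exponentially weighted RLS recursions (after Haykin [2]).
        M = len(p)
        K = max(M, N) - 1
        xpad = np.concatenate([np.zeros(K), x])
        w = np.zeros(N)
        P = delta * np.eye(N)                   # estimate of the inverse correlation matrix
        e = np.zeros(len(x))
        for n in range(len(x)):
            j = n + K
            x1 = xpad[j - M + 1:j + 1][::-1]
            u = xpad[j - N + 1:j + 1][::-1]     # tap-input vector u(n)
            d = p @ x1                          # primary noise d(n)
            xi = d - w @ u                      # a priori estimation error xi(n)
            Pu = P @ u
            k = Pu / (lam + u @ Pu)             # gain vector k(n)
            w += xi * k                         # tap-weight update
            P = (P - np.outer(k, Pu)) / lam     # update of P(n)
            e[n] = xi
        return w, e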

 

4 ARTIFICIAL NEURAL NETWORKS

An artificial neural network (ANN) [12][10][11] is an information processing system that has certain performance characteristics in common with biological neural networks. A neural net consists of a large number of processing elements called neurons, units, cells, or nodes. Each neuron has an internal state called its activation or activity level, which is a function of the inputs it has received. A neuron sends its activation as a signal to several other neurons. Each neuron is connected to other neurons by means of directed communication links, each with an associated weight. The process of adjusting the weights is referred to as learning, and the procedure to incrementally update each of the weights is called a learning law or learning algorithm. Neural networks have the following characteristics:

1. They have a nonlinear input-output characteristic.

2. They can change their own characteristics by learning.

A neural network has the ability to perform tasks based on the data given for training or on initial experience. It can create its own organization or representation of the information it receives during learning. Its computation may be carried out in parallel, and special hardware devices are being designed and manufactured to take advantage of this capability.
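As a toy Python illustration of the two characteristics listed above, consider a single neuron; the weights and inputs here are arbitrary, and learning would adjust w from data.

    import numpy as np

    # A single neuron: its activation is a (nonlinear) function of the
    # weighted sum of its inputs arriving over the directed links.
    def neuron(x, w, f=np.tanh):
        return f(w @ x)   # nonlinear input-output characteristic

    print(neuron(np.array([0.5, -1.0, 0.25]), np.array([0.1, 0.4, -0.3])))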


4.1 Delta Rule Algorithm

The delta rule algorithm [12] is widely used in artificial neural networks for pattern recognition and differs from the LMS algorithm in the weight-update equation. The change in the weight vector under the delta rule is

Δw(n) = μ (d(n) – f(y(n))) f′(y(n)) x2(n)                                                 (5)

where f(·) is the output function.

The above equation is valid only for a differentiable output function. In the LMS algorithm the output function is linear, f(x) = x. In this case, we use nonlinear output functions that possess a sigmoid nonlinearity. Since the derivative of the function is also used, the output function must be differentiable. Two examples of sigmoid nonlinear functions are the logistic function and the hyperbolic tangent function.

The logistic function is given by f(y) = 1 / (1 + e^(–βy)), and its output lies between 0 and 1. The second is the hyperbolic tangent function, given by f(y) = (e^(βy) – e^(–βy)) / (e^(βy) + e^(–βy)), whose output lies between –1 and 1. Here β is the scaling factor. Since the maximum value of the logistic function and of the hyperbolic tangent function is 1, divergence of the weights is avoided. By properly choosing the value of β, faster convergence can be achieved. The computational complexity of the delta rule algorithm is the sum of the computational complexity of the LMS algorithm and the computations involved in calculating f(y) and f′(y) and multiplying these functions into the weight-update vector. The computational complexity is therefore not greatly increased compared to the LMS algorithm.
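A minimal Python sketch of the delta rule update (5) with the logistic output function follows, using its derivative f′(y) = β f(y)(1 – f(y)); the step size mu and the value of beta are illustrative assumptions.

    import numpy as np

    def logistic(y, beta):
        return 1.0 / (1.0 + np.exp(-beta * y))       # output lies in (0, 1)

    def delta_rule_anc(x, p, N, mu, beta):
        # Delta rule per Eq. (5): w <- w + mu (d(n) - f(y(n))) f'(y(n)) x2(n).
        M = len(p)
        K = max(M, N) - 1
        xpad = np.concatenate([np.zeros(K), x])
        w = np.zeros(N)
        e = np.zeros(len(x))
        for n in range(len(x)):
            j = n + K
            x1 = xpad[j - M + 1:j + 1][::-1]
            x2 = xpad[j - N + 1:j + 1][::-1]
            d = p @ x1                               # primary noise d(n)
            fy = logistic(w @ x2, beta)              # nonlinear output f(y(n))
            e[n] = d - fy                            # error against f(y(n))
            w += mu * e[n] * beta * fy * (1.0 - fy) * x2   # Eq. (5)
        return w, e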

 

 

5 SIMULATION AND RESULTS

The weight-update equations used for the LMS and delta rule algorithms are (4) and (5), respectively; for the RLS algorithm, the standard recursions described in Section 3 are used. A white Gaussian noise with zero mean and unit variance is used as the reference signal. The primary path p(n) is simulated by a filter of length 127, and the length of w(n) is chosen to be 96. The logistic function is used for the simulation of the delta rule algorithm. All the weights are initialized to zero, and all the results are averaged over 100 cycles. All the simulations were done using MATLAB 6.0 (Release 12.1). The input white Gaussian noise of zero mean and unit variance is shown in figure 5.1. The residual noise of the LMS algorithm is shown in figure 5.2, and the residual noise of the RLS and delta rule algorithms is shown in figures 5.3 and 5.4, respectively. The RLS algorithm converges much more quickly than the other two algorithms, and its residual noise is also lower. From figure 5.4, the residual noise of the delta rule algorithm is less than that of the LMS algorithm.
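A comparison driver in the spirit of this setup might look as follows, reusing the lms_anc, rls_anc, and delta_rule_anc sketches given in the earlier sections. The randomly generated stand-in plant, the step sizes, beta, and the reduced count of 10 averaging cycles (versus 100 in the paper) are assumptions made for brevity.

    import numpy as np

    # Plant of length 127, filter of length 96, unit-variance white Gaussian
    # reference, zero-initialized weights, residual power averaged over runs.
    rng = np.random.default_rng(1)
    p = 0.1 * rng.standard_normal(127)
    T, runs = 5000, 10
    mse = {"LMS": np.zeros(T), "RLS": np.zeros(T), "Delta": np.zeros(T)}
    for _ in range(runs):
        x = rng.standard_normal(T)
        mse["LMS"] += lms_anc(x, p, N=96, mu=0.001)[1] ** 2
        mse["RLS"] += rls_anc(x, p, N=96)[1] ** 2
        mse["Delta"] += delta_rule_anc(x, p, N=96, mu=0.01, beta=1.0)[1] ** 2
    for name, m in mse.items():
        print(name, "steady-state residual power:", (m / runs)[-500:].mean())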

 

Fig 5.1 Input Gaussian Noise

Fig 5.2 Residual Noise of LMS algorithm

Fig 5.3 Residual Noise of RLS algorithm

Fig 5.4 Residual Noise of Delta Rule Algorithm


6 CONCLUSIONS

In this paper, we have compared the performance of the LMS, RLS, and delta rule algorithms for a white Gaussian noise reference. From the results shown in the above section, we can conclude that the RLS algorithm performs better than both of the other algorithms, but its computational complexity is of a much higher order than that of the other two. The delta rule algorithm requires slightly more computation than the LMS algorithm, and its residual noise is lower than that of the LMS algorithm. The delta rule algorithm is therefore the more efficient choice when both noise reduction and computational complexity are taken into consideration.

 

 

REFERENCES

[1] S. M. Kuo and D. R. Morgan, Active Noise Control Systems: Algorithms and DSP Implementations. New York: Wiley, 1996, p. 37.

[2] S. Haykin, Adaptive Filter Theory, 3rd ed. Prentice Hall, 1996.

 

[3] S. Douglas and W. Pan, "Exact expectation analysis of the LMS adaptive filter," IEEE Trans. Signal Processing, vol. 43, pp. 2863–2871, Dec. 1995.

[4] T. K. Woo, "Fast hierarchical least mean square algorithm," IEEE Signal Processing Letters, vol. 8, no. 11, November 2001.

[5] I. Mareels and J. W. Polderman, Adaptive Systems: An Introduction. Birkhäuser, 1996 (averaging, geometric interpretation of LMS, emphasis on control).

[6] M. Moonen, Introduction to Adaptive Signal Processing. Leuven, Belgium, 1995.

[7] S. M. Kuo, M. Nadeski, T. Horner, J. Chyan, and I. Panahi, "Fixed-point DSP implementation of active noise control systems," Proc. Noise-Con, pp. 337–342, 1994.

[8] T. K. Moon and W. C. Stirling, Mathematical Methods and Algorithms for Signal Processing, Chap. 14.6. Prentice Hall, 2000.

[9] G. O. Glentis, K. Berberidis, and S. Theodoridis, "Efficient least squares adaptive algorithms for FIR transversal filtering," IEEE Signal Processing Magazine, vol. 16, no. 4, pp. 13–41, July 1999.

[10] Y. Maeda and R. J. P. de Figueiredo, "Learning rules for neuro-controller via simultaneous perturbation," IEEE Trans. on Neural Networks, pp. 1119–1130, 1997.

[11] Youichi Tsuyama and Yutaka Maeda, "Active Noise Control Using Neural Network with the Simultaneous Perturbation Learning Rule," SJCE02.

[12] B. Yegnanarayana, Artificial Neural Networks. Prentice-Hall of India, August 2001, pp. 124–

